
    The Consent Myth: Improving Choice for Patients of the Future

    Consent has enjoyed a prominent position in the American privacy system since at least 1970, though historically, consent emerged from traditional notions of tort and contract. Largely because consent has an almost deferential power as a proxy for consumer choice, organizations increasingly use consent as a de facto standard for demonstrating privacy commitments. The Department of Health and Human Services and the Federal Trade Commission have integrated the concept of consent into health care, research, and general commercial activities. However, this de facto standard, while useful in some contexts, does not sufficiently promote individual patient interests within leading health technologies, including the Internet of Health Things and Artificial Intelligence. Despite consent’s prominence in United States law, this Article seeks to understand more fully consent’s role in modern health applications, then applies a philosophical-legal lens to clearly identify problems with consent in its current use. This Article identifies the principal issues with substituting consent for choice (the “consent myth,” a collection of five problems), then proposes principles for addressing these problems in contemporary health technologies.

    Meaningful Choice: A History of Consent and Alternatives to the Consent Myth

    Although the first legal conceptions of commercial privacy were identified in Samuel Warren and Louis Brandeis’s foundational 1890 article, The Right to Privacy, conceptually, privacy has existed since as early as 1127 as a natural concern when navigating between personal and commercial spheres of life. As an extension of contract and tort law, two common relational legal models, U.S. privacy law emerged to buoy engagement in commercial enterprise, borrowing known legal conventions like consent and assent. Historically, however, international legal privacy frameworks involving consent ultimately diverged, with the European Union taking a more expansive view of legal justifications for processing as alternatives to consent. Unfortunately, consent as a procedural substitute for individual choice has created a number of issues in achieving legitimate and effective privacy protections for Americans. The problems with consent as a proxy for choice are well known. This Article explores the twin history of two diverging bodies of law as they apply to the privacy realm, then introduces the concept of legitimate interest balancing as an alternative to consent. Legitimate interest analysis requires that an organization formally assess, with input from actual consumers, whether data collection and use ultimately result in greater benefit to individuals than to the organization. This model shifts responsibility from individual consumers, who would otherwise have to protect their own interests, to organizations, which must engage in fair data use practices to legally collect and use data. Finally, this Article positions the model in relation to common law, federal law, Federal Trade Commission activities, and judicial decision-making as a means for separating well-intentioned organizations from unethical ones.

    Medical Device Artificial Intelligence: The New Tort Frontier

    The medical device industry and new technology start-ups have dramatically increased investment in artificial intelligence (AI) applications, including diagnostic tools and AI-enabled devices. These technologies have been positioned to reduce climbing health costs while simultaneously improving health outcomes. Technologies like AI-enabled surgical robots, AI-enabled insulin pumps, and cancer detection applications hold tremendous promise, yet without appropriate oversight, they will likely pose major safety issues. While preventative safety measures may reduce risk to patients using these technologies, effective regulatory-tort regimes also permit recovery when preventative solutions are insufficient. The Food and Drug Administration (FDA), the administrative agency responsible for overseeing the safety and efficacy of medical devices, has not effectively addressed AI system safety issues in its clearance processes. If the FDA cannot reasonably reduce the risk of injury for AI-enabled medical devices, injured patients should be able to rely on ex post recovery options, as in products liability cases. However, the Medical Device Amendments Act (MDA) of 1976 introduced an express preemption clause that the U.S. Supreme Court has interpreted to nearly foreclose liability claims, based almost completely on the comprehensiveness of FDA clearance review processes. At its inception, MDA preemption aimed to balance consumer interests in safe medical devices with efficient, consistent regulation to promote innovation and reduce costs. Although preemption remains an important mechanism for balancing injury risks with device availability, the introduction of AI software dramatically changes the risk profile for medical devices. Due to the inherent opacity and changeability of AI algorithms powering AI machines, it is nearly impossible to predict all potential safety hazards a faulty AI system might pose to patients.
This Article identifies key preemption issues for AI machines as they affect ex ante and ex post regulatory-tort allocation, including actual FDA review for parallel claims, bifurcation of software and device reviews, and dynamics of the technology itself that may enable plaintiffs to avoid preemption. This Author then recommends an alternative conception of the regulatory-tort allocation for AI machines that will create a more comprehensive and complementary safety and compensatory model.

    Prescribing Exploitation


    The Healthcare Privacy-Artificial Intelligence Impasse

    With the advent of the Internet, wireless technologies, advanced computing, and, ultimately, the integration of mobile devices into patient care, medical device technologies have revolutionized the healthcare sector. What once was a highly personal, one-to-one relationship between physician and patient has now expanded to include medical device manufacturers, third-party healthcare system providers, and even physician-as-a-service models for interpreting the data that complex systems churn out. The introduction of technology to the healthcare field has, at an ever-increasing rate, transformed human health management. Reworking privacy commitments in an AI world is an important endeavor. It may mean that we reconceptualize what these rights must be against a broader data need. It will likely include investment in better approaches to reduced identifiability that protect patients while promoting data use and sharing that will not reidentify them. It may also mean permitting, at least for AI, broader declarations in privacy notices that put patients on notice of AI use while also permitting broader data use. Without an approach that balances both innovation and patient protection, we cannot realize the incredible potential of AI in healthcare.
